Gaussian process learning via Fisher scoring of Vecchia’s approximation

Authors

Abstract

We derive a single-pass algorithm for computing the gradient and Fisher information of Vecchia’s Gaussian process loglikelihood approximation, which provides a computationally efficient means of applying the Fisher scoring algorithm for maximizing the loglikelihood. The advantages of the optimization techniques are demonstrated in numerical examples and in an application to Argo ocean temperature data. The new methods find maximum likelihood estimates much faster and more reliably than a method that uses only function evaluations, especially when the covariance function has many parameters. This allows practitioners to fit nonstationary models to large spatial and spatial–temporal datasets.
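The Fisher scoring iteration that the abstract refers to can be sketched generically: each step solves the Fisher information matrix against the score vector, θ ← θ + I(θ)⁻¹∇ℓ(θ). This is a minimal illustration, not the paper's single-pass Vecchia algorithm; `grad_loglik` and `fisher_info` are hypothetical user-supplied callables standing in for the quantities the paper computes.

```python
import numpy as np

def fisher_scoring(theta0, grad_loglik, fisher_info, tol=1e-8, max_iter=100):
    """Generic Fisher scoring: theta <- theta + I(theta)^{-1} grad(theta).

    grad_loglik returns the score vector, fisher_info the Fisher
    information matrix (hypothetical names; the paper obtains both in a
    single pass over Vecchia's loglikelihood approximation).
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(fisher_info(theta), grad_loglik(theta))
        theta = theta + step
        if np.linalg.norm(step) < tol:
            break
    return theta

# Toy check: Gaussian loglikelihood in the mean with unit variance.
# score(theta) = sum(x - theta), information = n, so scoring converges
# to the sample mean in one step.
x = np.array([1.0, 2.0, 3.0])
mu_hat = fisher_scoring(
    np.array([0.0]),
    grad_loglik=lambda t: np.array([np.sum(x - t[0])]),
    fisher_info=lambda t: np.array([[float(len(x))]]),
)
```

For this toy Gaussian-mean example the iteration lands exactly on the sample mean, which is why scoring is attractive when the information matrix is cheap to form, as the paper argues it is under Vecchia's approximation.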


Similar articles

Learning Gaussian Process Kernels via Hierarchical Bayes

We present a novel method for learning with Gaussian process regression in a hierarchical Bayesian framework. In a first step, kernel matrices on a fixed set of input points are learned from data using a simple and efficient EM algorithm. This step is nonparametric, in that it does not require a parametric form of covariance function. In a second step, kernel functions are fitted to approximate...


Inverse Reinforcement Learning via Deep Gaussian Process

We propose a new approach to inverse reinforcement learning (IRL) based on the deep Gaussian process (deep GP) model, which is capable of learning complicated reward structures with few demonstrations. Our model stacks multiple latent GP layers to learn abstract representations of the state feature space, which is linked to the demonstrations through the Maximum Entropy learning framework. Inco...


Hierarchically-partitioned Gaussian Process Approximation

The Gaussian process (GP) is a simple yet powerful probabilistic framework for various machine learning tasks. However, exact algorithms for learning and prediction are prohibitively expensive to apply to large datasets due to their inherent computational complexity. To overcome this main limitation, various techniques have been proposed, and in particular, local GP algorithms that scale “truly linearly” w...


Adaptive Gaussian Predictive Process Approximation

We address the issue of knots selection for Gaussian predictive process methodology. Predictive process approximation provides an effective solution to the cubic order computational complexity of Gaussian process models. This approximation crucially depends on a set of points, called knots, at which the original process is retained, while the rest is approximated via a deterministic extrapolati...


Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring

In this paper we address the following question: “Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?”. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Li...
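The SGLD baseline mentioned in this abstract can be illustrated on a toy posterior: each update adds a minibatch estimate of the log-posterior gradient, rescaled by N/batch, plus injected Gaussian noise of matching scale. This is a minimal sketch of plain SGLD under an assumed Gaussian-mean model with a flat prior, not the paper's Fisher-scoring variant; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: Gaussian observations with unknown mean, known unit variance.
N = 1000
data = rng.normal(1.5, 1.0, size=N)

def sgld(theta, n_steps=5000, batch=50, eps=1e-3):
    """Stochastic Gradient Langevin Dynamics on the posterior of the mean
    (flat prior): a noisy minibatch gradient plus injected Gaussian noise,
    so the iterates approximately sample the posterior rather than
    converging to a point estimate."""
    samples = []
    for _ in range(n_steps):
        mb = rng.choice(data, size=batch, replace=False)
        # Unbiased estimate of the full-data score, rescaled by N / batch.
        grad = (N / batch) * np.sum(mb - theta)
        theta = theta + 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
        samples.append(theta)
    return np.array(samples)

draws = sgld(theta=0.0)
```

After a short burn-in the draws fluctuate around the sample mean; the slow mixing and step-size sensitivity of this plain scheme are exactly what the paper's Fisher-scoring preconditioning is meant to address.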



Journal

Journal title: Statistics and Computing

Year: 2021

ISSN: 0960-3174, 1573-1375

DOI: https://doi.org/10.1007/s11222-021-09999-1